How can we improve rigour and even reproducibility when using AI in social science? This chapter suggests some answers.
A lot of evaluation work is a kind of text analysis: processing reports, interview transcripts, and so on, rather like qualitative social science research. So this piece is aimed at evaluators in particular and (qualitative) social scientists in general.
I often hear concerns, in everyday life as well as in evaluation, about algorithms and AI taking over our lives or forcing us to submit to decisions made by machines.
Nowadays, people are using AI for text analysis, and many of us worry about AI's "hidden biases". What can we do about that?
It's strange how often this happens. I just found myself writing:
Who said philosophy was a waste of time? When I was studying philosophy in the 80s, I was fascinated by John Searle's Chinese Room Argument, and by Douglas Hofstadter's fantastic book "Gödel, Escher, Bach", which amounts, amongst other things, to a refutation of it.